Katja Maquate: Language Evaluation

RePsychLing in SMLP2022

Author

Reinhold Kliegl

Published

September 3, 2022

1 List of issues to be discussed

A few things that still confuse me and that would be nice to discuss are:

  • The importance of 0-correlation models for random effect-structure reduction
  • Setting REML to FALSE for random effect structure reduction - some do it, some don’t. What do you advise, and why?
  • Sometimes a model with REML=FALSE converges and rePCA shows that all the variance is captured by the random effect structure, but when set back to REML=TRUE it no longer converges. Why is that, and how would you advise us to proceed?
  • Getting power estimates for main studies based on pilot data and simulations, i.e., how many participants will I need (the observed-power issue / debate)
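
One simulation-based answer to the power question, sketched with the tools used below: simulate many data sets from a pilot model and count how often the effect of interest reaches significance. This is only an illustration; the pilot model name `m_pilot` and the 5% criterion are my choices, and scaling the design up to more participants would additionally require simulating a larger design, e.g. with MixedModelsSim.jl.

```julia
# Crude power sketch (assumes a pilot LMM `m_pilot` fitted with MixedModels.jl):
# simulate 1000 data sets from the fitted model, refit, and count how often
# the RC effect reaches significance at α = .05.
using DataFrames, MixedModels, Random, Statistics

boot = parametricbootstrap(MersenneTwister(42), 1000, m_pilot)
pvals = DataFrame(boot.coefpvalues)
power_RC = mean(pvals[pvals.coefname .== Symbol("RC: match"), :p] .< 0.05)
```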

2 Background of the data

Do participants take the situational-functional setting into account when evaluating language? How do semantic incongruencies compare to incongruencies in register? Do they interact? Participants were presented with a picture prime of either formally or informally dressed speakers. They then heard that speaker utter a target sentence that either matched or mismatched the register and/or matched or mismatched the semantic verb-argument congruency of the target sentence.

3 Design and variables

  • Design: 2x2, fully crossed Latin square

  • Dependent Variable: Acceptability ratings on a fully-anchored 7-point Likert-type scale: How acceptable is the sentence when spoken by the presented speaker (1=not at all - 7=completely)

  • Independent Variables

    • RC (match vs. mismatch): target sentence final noun either matches or mismatches in register with the prime picture. Example mismatch: prime picture showing formally dressed speaker, target sentence heard: “Ich binde jetzt meinen Latschen.” (lit. transl: I tie now my shoes)
    • SC (yes vs. no): verb and argument in target sentence are either semantically congruent or not. Example mismatch: “Ich binde jetzt meine Klamotten” (lit. transl: I tie now my clothes)
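
The Latin-square rotation behind the design can be sketched in a few lines (a hypothetical illustration; the variable names are mine and the actual list construction may have differed):

```julia
# The four RC x SC cells rotated over four presentation lists:
# every item group appears in one cell per list, and across the four
# lists each item group appears in all four cells.
conds = vec([(RC=rc, SC=sc) for rc in ("match", "mismatch"), sc in ("yes", "no")])
lists = [circshift(conds, k) for k in 0:3]  # lists[l][i] = cell of item group i in list l
```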

4 Setup

using AlgebraOfGraphics
using Arrow
using CairoMakie       # graphics back-end
using CategoricalArrays
using Chain
using DataFrames
using DataFrameMacros  # simplified dplyr-like data wrangling 
using KernelDensity    # density estimation
using MixedModels
using MixedModelsMakie # diagnostic plots
using ProgressMeter
using Random           # random number generators
using RCall            # call R from Julia
using StatsBase
using StatsModels

using AlgebraOfGraphics: density
using AlgebraOfGraphics: boxplot
using MixedModelsMakie: qqnorm
using MixedModelsMakie: ridgeplot
using MixedModelsMakie: scatter

ProgressMeter.ijulia_behavior(:clear);
CairoMakie.activate!(; type="svg");
dat = DataFrame(Arrow.Table("./data/Maquate_LanguageEvaluation.arrow"));

describe(dat)

5 rows × 7 columns

     variable  mean     min    median  max       nmissing  eltype
 ──────────────────────────────────────────────────────────────────────
  1  Subj               1              90        0         Union{Missing, String}
  2  Item               1              90        0         Union{Missing, String}
  3  RC                 match          mismatch  0         Union{Missing, String}
  4  SC                 no             yes       0         Union{Missing, String}
  5  rating    3.54271  1      3.0     7         0         Union{Missing, Int32}

5 Linear mixed models

5.1 Contrasts

contrasts = Dict(
      :RC => EffectsCoding(; base="mismatch", levels=["match", "mismatch"]),
      :SC => EffectsCoding(; base="no", levels=["yes", "no"]),
      :Subj => Grouping(),
      :Item => Grouping(),
   );
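
To double-check what this coding does, the hypothesis matrix for one factor can be inspected directly (a quick sketch using StatsModels; with `base="mismatch"`, match is coded +1 and mismatch -1):

```julia
using StatsModels

# Build the contrasts matrix for RC with its two levels
cm = StatsModels.ContrastsMatrix(
  EffectsCoding(; base="mismatch"), ["match", "mismatch"]
)
cm.matrix  # 2×1 matrix: +1 for match, -1 for mismatch
```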

5.2 LMM analysis

Just a minimal baseline reference LMM, varying only the GMs (intercepts) for subjects and items.

m_voi = let
  form = @formula(rating ~ 1 + RC*SC + (1 | Subj) + (1 | Item));
  fit(MixedModel, form, dat; contrasts);
end
Est. SE z p σ_Item σ_Subj
(Intercept) 3.5452 0.1389 25.53 <1e-99 0.4567 0.6348
RC: match 0.1040 0.0470 2.21 0.0270
SC: yes 1.3084 0.0470 27.82 <1e-99
RC: match & SC: yes -0.0168 0.0471 -0.36 0.7215
Residual 1.7331

Katja Maquate’s selection.

m_KM = let
  form = @formula(rating ~ 1 + RC*SC + (1+SC | Subj) + (1+SC | Item));
  fit(MixedModel, form, dat; contrasts);
end

display(issingular(m_KM))
display(m_KM.PCA[:Subj])  # also: MixedModels.PCA(m_KM)[:Subj]
display(m_KM.PCA[:Item])  # also: MixedModels.PCA(m_KM)[:Item]

m_KM
false

Principal components based on correlation matrix
 (Intercept)  1.0  .
 SC: yes      0.3  1.0

Normalized cumulative variances:
[0.6506, 1.0]

Component loadings
                PC1    PC2
 (Intercept)  -0.71  -0.71
 SC: yes      -0.71   0.71

Principal components based on correlation matrix
 (Intercept)  1.0    .
 SC: yes      0.37  1.0

Normalized cumulative variances:
[0.6869, 1.0]

Component loadings
                PC1    PC2
 (Intercept)  -0.71  -0.71
 SC: yes      -0.71   0.71
Est. SE z p σ_Item σ_Subj
(Intercept) 3.5458 0.1394 25.44 <1e-99 0.4555 0.6488
RC: match 0.1059 0.0432 2.45 0.0143
SC: yes 1.3090 0.1219 10.74 <1e-26 0.3481 0.5821
RC: match & SC: yes -0.0160 0.0432 -0.37 0.7115
Residual 1.5912

Maximal LMM (for this design)

m_max = let 
    form = @formula(rating ~ 1 + RC*SC + (1+SC*RC | Subj) + (1+SC*RC | Item));

    fit(MixedModel, form, dat; contrasts);
  end 

display(issingular(m_max))
display(m_max.PCA[:Subj]) 
display(m_max.PCA[:Item]) 

VarCorr(m_max) 
true

Principal components based on correlation matrix
 (Intercept)           1.0     .      .      .
 SC: yes               0.29   1.0     .      .
 RC: match            -0.36   0.17   1.0     .
 SC: yes & RC: match  -0.21  -0.03   0.95   1.0

Normalized cumulative variances:
[0.5261, 0.837, 1.0, 1.0]

Component loadings
                        PC1   PC2    PC3    PC4
 (Intercept)          -0.35  0.57   0.72  -0.18
 SC: yes              -0.0   0.79  -0.58   0.19
 RC: match             0.68  0.18   0.01  -0.72
 SC: yes & RC: match   0.65  0.12   0.37   0.65

Principal components based on correlation matrix
 (Intercept)           1.0     .      .      .
 SC: yes               0.38   1.0     .      .
 RC: match             0.18   0.78   1.0     .
 SC: yes & RC: match  -0.0    0.66   0.98   1.0

Normalized cumulative variances:
[0.6687, 0.9315, 1.0, 1.0]

Component loadings
                        PC1    PC2    PC3    PC4
 (Intercept)          -0.19   0.91  -0.36   0.09
 SC: yes              -0.54   0.2    0.81   0.11
 RC: match            -0.6   -0.15  -0.26  -0.74
 SC: yes & RC: match  -0.56  -0.33  -0.39   0.65
Column Variance Std.Dev Corr.
Item (Intercept) 0.2200923 0.4691400
SC: yes 0.1223842 0.3498346 +0.38
RC: match 0.0110046 0.1049030 +0.18 +0.78
SC: yes & RC: match 0.0021880 0.0467757 -0.00 +0.66 +0.98
Subj (Intercept) 0.4259580 0.6526546
SC: yes 0.3373426 0.5808121 +0.29
RC: match 0.0368008 0.1918353 -0.36 +0.17
SC: yes & RC: match 0.0720661 0.2684514 -0.21 -0.03 +0.95
Residual 2.4005589 1.5493737

Overparameterized according to all criteria. Force the correlation parameters to zero.

m_zcp = let 
    form = @formula(rating ~ 1 + RC*SC + zerocorr(1+SC*RC | Subj) + zerocorr(1+SC*RC | Item));

    fit(MixedModel, form, dat; contrasts);
  end 

display(issingular(m_zcp))
display(m_zcp.PCA[:Subj]) 
display(m_zcp.PCA[:Item]) 
display(VarCorr(m_zcp))
true

Principal components based on correlation matrix
 (Intercept)          1.0  .    .    .
 SC: yes              0.0  1.0  .    .
 RC: match            0.0  0.0  1.0  .
 SC: yes & RC: match  0.0  0.0  0.0  1.0

Normalized cumulative variances:
[0.25, 0.5, 0.75, 1.0]

Component loadings
                       PC1   PC2   PC3   PC4
 (Intercept)          1.0   0.0   0.0   0.0
 SC: yes              0.0   1.0   0.0   0.0
 RC: match            0.0   0.0   1.0   0.0
 SC: yes & RC: match  0.0   0.0   0.0   1.0

Principal components based on correlation matrix
 (Intercept)          1.0  .    .    .
 SC: yes              0.0  1.0  .    .
 RC: match            0.0  0.0  1.0  .
 SC: yes & RC: match  0.0  0.0  0.0  0.0

Normalized cumulative variances:
[0.3333, 0.6667, 1.0, 1.0]

Component loadings
                       PC1   PC2   PC3     PC4
 (Intercept)          1.0   0.0   0.0     0.0
 SC: yes              0.0   0.0   1.0     0.0
 RC: match            0.0   1.0   0.0     0.0
 SC: yes & RC: match  0.0   0.0   0.0   NaN
Column Variance Std.Dev Corr.
Item (Intercept) 0.2120894 0.4605316
SC: yes 0.1237099 0.3517242 .
RC: match 0.0043949 0.0662939 . .
SC: yes & RC: match 0.0000000 0.0000000 . . .
Subj (Intercept) 0.4207147 0.6486253
SC: yes 0.3419761 0.5847872 .
RC: match 0.0172534 0.1313521 . .
SC: yes & RC: match 0.0636905 0.2523697 . . .
Residual 2.4389531 1.5617148

There is no evidence for the Item-related VC for the interaction, and the Item-related VC for RC is also very small compared to the other VCs. This causes overparameterization. We remove both from the model.

m_prsm = let 
    form = @formula(rating ~ 1 + RC*SC + zerocorr(1+SC*RC | Subj) + zerocorr(1+SC | Item));

    fit(MixedModel, form, dat; contrasts);
  end 

display(issingular(m_prsm))
display(m_prsm.PCA[:Subj]) 
display(m_prsm.PCA[:Item]) 

display(VarCorr(m_prsm)) 

display(lrtest(m_prsm, m_zcp, m_max))

lrtest(m_prsm, m_max)

m_prsm
false

Principal components based on correlation matrix
 (Intercept)          1.0  .    .    .
 SC: yes              0.0  1.0  .    .
 RC: match            0.0  0.0  1.0  .
 SC: yes & RC: match  0.0  0.0  0.0  1.0

Normalized cumulative variances:
[0.25, 0.5, 0.75, 1.0]

Component loadings
                       PC1   PC2   PC3   PC4
 (Intercept)          1.0   0.0   0.0   0.0
 SC: yes              0.0   1.0   0.0   0.0
 RC: match            0.0   0.0   0.0   1.0
 SC: yes & RC: match  0.0   0.0   1.0   0.0

Principal components based on correlation matrix
 (Intercept)  1.0  .
 SC: yes      0.0  1.0

Normalized cumulative variances:
[0.5, 1.0]

Component loadings
               PC1   PC2
 (Intercept)  0.0   1.0
 SC: yes      1.0   0.0
Column Variance Std.Dev Corr.
Subj (Intercept) 0.421039 0.648875
SC: yes 0.342744 0.585443 .
RC: match 0.017032 0.130507 . .
SC: yes & RC: match 0.064388 0.253748 . . .
Item (Intercept) 0.211981 0.460414
SC: yes 0.123834 0.351901 .
Residual 2.443138 1.563054
Likelihood-ratio test: 3 models fitted on 1358 observations
─────────────────────────────────────────────────────────
     DOF  ΔDOF      LogLik   Deviance    Chisq  p(>Chisq)
─────────────────────────────────────────────────────────
[1]   11        -2662.3274  5324.6548                    
[2]   13     2  -2662.2985  5324.5969   0.0579     0.9715
[3]   25    12  -2651.6021  5303.2041  21.3928     0.0449
─────────────────────────────────────────────────────────
Est. SE z p σ_Subj σ_Item
(Intercept) 3.5456 0.1396 25.40 <1e-99 0.6489 0.4604
RC: match 0.1059 0.0480 2.21 0.0274 0.1305
SC: yes 1.3089 0.1224 10.70 <1e-25 0.5854 0.3519
RC: match & SC: yes -0.0171 0.0608 -0.28 0.7784
SC: yes & RC: match 0.2537
Residual 1.5631

The LMM is defensible. It does not fit worse than m_max.

Let’s check the extension with CPs for this reduced version.

m_ext = let 
    form = @formula(rating ~ 1 + RC*SC + zerocorr(1+SC*RC | Subj) + (1+SC | Item));

    fit(MixedModel, form, dat; contrasts);
  end 

display(issingular(m_ext))
display(m_ext.PCA[:Subj]) 
display(m_ext.PCA[:Item]) 

display(VarCorr(m_ext)) 

display(lrtest( m_prsm, m_ext, m_max))
false

Principal components based on correlation matrix
 (Intercept)          1.0  .    .    .
 SC: yes              0.0  1.0  .    .
 RC: match            0.0  0.0  1.0  .
 SC: yes & RC: match  0.0  0.0  0.0  1.0

Normalized cumulative variances:
[0.25, 0.5, 0.75, 1.0]

Component loadings
                       PC1   PC2   PC3   PC4
 (Intercept)          1.0   0.0   0.0   0.0
 SC: yes              0.0   1.0   0.0   0.0
 RC: match            0.0   0.0   1.0   0.0
 SC: yes & RC: match  0.0   0.0   0.0   1.0

Principal components based on correlation matrix
 (Intercept)  1.0    .
 SC: yes      0.37  1.0

Normalized cumulative variances:
[0.6848, 1.0]

Component loadings
                PC1    PC2
 (Intercept)  -0.71  -0.71
 SC: yes      -0.71   0.71
Column Variance Std.Dev Corr.
Subj (Intercept) 0.423118 0.650475
SC: yes 0.341108 0.584044 .
RC: match 0.018667 0.136626 . .
SC: yes & RC: match 0.063805 0.252596 . . .
Item (Intercept) 0.212095 0.460537
SC: yes 0.124316 0.352585 +0.37
Residual 2.441789 1.562622
Likelihood-ratio test: 3 models fitted on 1358 observations
─────────────────────────────────────────────────────────
     DOF  ΔDOF      LogLik   Deviance    Chisq  p(>Chisq)
─────────────────────────────────────────────────────────
[1]   11        -2662.3274  5324.6548                    
[2]   12     1  -2661.0422  5322.0844   2.5705     0.1089
[3]   25    13  -2651.6021  5303.2041  18.8802     0.1269
─────────────────────────────────────────────────────────

No, the CP is not needed. How about the Subj-related CPs?

m_ext2 = let 
    form = @formula(rating ~ 1 + RC*SC + (1+SC*RC | Subj) + zerocorr(1+SC | Item));

    fit(MixedModel, form, dat; contrasts);
  end 

display(issingular(m_ext2))
display(m_ext2.PCA[:Subj]) 
display(m_ext2.PCA[:Item]) 

display(VarCorr(m_ext2)) 

display(lrtest( m_prsm, m_ext2, m_max))
true

Principal components based on correlation matrix
 (Intercept)           1.0     .      .      .
 SC: yes               0.3    1.0     .      .
 RC: match            -0.77   0.37   1.0     .
 SC: yes & RC: match  -0.27   0.01   0.27   1.0

Normalized cumulative variances:
[0.4836, 0.7898, 1.0, 1.0]

Component loadings
                        PC1    PC2    PC3    PC4
 (Intercept)          -0.64   0.33  -0.28   0.63
 SC: yes               0.06   0.9   -0.03  -0.43
 RC: match             0.66   0.27   0.26   0.65
 SC: yes & RC: match   0.38  -0.06  -0.92  -0.0

Principal components based on correlation matrix
 (Intercept)  1.0  .
 SC: yes      0.0  1.0

Normalized cumulative variances:
[0.5, 1.0]

Component loadings
               PC1   PC2
 (Intercept)  1.0   0.0
 SC: yes      0.0   1.0
Column Variance Std.Dev Corr.
Subj (Intercept) 0.4199073 0.6480025
SC: yes 0.3410349 0.5839819 +0.30
RC: match 0.0099450 0.0997245 -0.77 +0.37
SC: yes & RC: match 0.0642978 0.2535702 -0.27 +0.01 +0.27
Item (Intercept) 0.2103391 0.4586274
SC: yes 0.1220479 0.3493535 .
Residual 2.4534411 1.5663464
Likelihood-ratio test: 3 models fitted on 1358 observations
─────────────────────────────────────────────────────────
     DOF  ΔDOF      LogLik   Deviance    Chisq  p(>Chisq)
─────────────────────────────────────────────────────────
[1]   11        -2662.3274  5324.6548                    
[2]   17     6  -2659.2234  5318.4468   6.2081     0.4003
[3]   25     8  -2651.6021  5303.2041  15.2427     0.0546
─────────────────────────────────────────────────────────

No, they are not needed either.

6 Compare models

LMM m_prsm is a defensible selection. It fits the same number of model parameters as Katja Maquate’s m_KM, but the models are not nested; therefore, the LRT does not apply. Let’s compare them anyway with AIC and BIC statistics.

display(coeftable(m_KM))
display(coeftable(m_prsm))

let mods = [m_voi, m_KM, m_prsm, m_max];
 DataFrame(;
    model=[:m_voi, :m_KM, :m_prsm, :m_max],
    pars=dof.(mods),
    geomdof=round.(Int, (sum ∘ leverage).(mods)),
    AIC=round.(Int, aic.(mods)),
    AICc=round.(Int, aicc.(mods)),
    BIC=round.(Int, bic.(mods)),
  )
end
Coef. Std. Error z Pr(>|z|)
(Intercept) 3.54578 0.1394 25.44 <1e-99
RC: match 0.105913 0.0432313 2.45 0.0143
SC: yes 1.30902 0.121901 10.74 <1e-26
RC: match & SC: yes -0.0159927 0.0432412 -0.37 0.7115
Coef. Std. Error z Pr(>|z|)
(Intercept) 3.54564 0.13958 25.40 <1e-99
RC: match 0.105916 0.04801 2.21 0.0274
SC: yes 1.30887 0.122377 10.70 <1e-25
RC: match & SC: yes -0.0171132 0.0608231 -0.28 0.7784

4 rows × 6 columns

     model   pars  geomdof   AIC  AICc   BIC
 ────────────────────────────────────────────
  1  m_voi      7       59  5472  5472  5508
  2  m_KM      11      110  5353  5353  5410
  3  m_prsm    11      137  5347  5347  5404
  4  m_max     25      135  5353  5354  5484

AIC and BIC are 6 points smaller for m_prsm than for m_KM. Using the rule of thumb that a difference of 5 points or more indicates a better fit, we select LMM m_prsm according to these statistics. There is no relevant difference in fixed-effect estimates between the two models. In an RES-exploratory setting, we may still want to test the significance of the VCs of both models and of the CPs of LMM m_KM.
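
The differences behind this statement can be computed directly (assuming m_KM and m_prsm from the sections above are in scope):

```julia
# ΔAIC and ΔBIC between Katja Maquate's selection and the reduced model;
# positive values favor m_prsm.
ΔAIC = aic(m_KM) - aic(m_prsm)
ΔBIC = bic(m_KM) - bic(m_prsm)
(ΔAIC, ΔBIC)  # both about 6
```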

7 Observation-level residual diagnostics

7.1 Residual-over-fitted plot

The slant in the residuals reflects the lower and upper boundaries of the rating scale; that is, we have too few small and too few large residuals. Not ideal.

Code
scatter(fitted(m_prsm), residuals(m_prsm))

Figure 1: Residuals versus fitted values for model m_prsm

Contour plots or heatmaps may be an alternative.

Code
set_aog_theme!()
draw(
  data((; f=fitted(m_prsm), r=residuals(m_prsm))) *
  mapping(
    :f => "Fitted values from m_prsm", :r => "Residuals from m_prsm"
  ) *
  density();
)

Figure 2: Heatmap of residuals versus fitted values for model m_prsm

7.2 Q-Q plot

The plot of quantiles of model residuals over corresponding quantiles of the normal distribution should yield a straight line along the main diagonal.

Code
qqnorm(m_prsm; qqline=:none)

Figure 3: Quantile-quantile plot of the residuals for model m_prsm

This looks very good.

7.3 Observed and theoretical normal distribution

Overall, it does not look too bad.

Code
let
  n = nrow(dat)
  dat_rz = (;
    value=vcat(residuals(m_prsm) ./ std(residuals(m_prsm)), randn(n)),
    curve=repeat(["residual", "normal"]; inner=n),
  )
  draw(
    data(dat_rz) *
    mapping(:value; color=:curve) *
    density(; bandwidth=0.1);
  )
end

Figure 4: Kernel density plot of the standardized residuals for model m_prsm versus a standard normal

8 Conditional modes of random effects

8.1 Caterpillar plot
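
A caterpillar plot of the prediction intervals for the conditional modes can be generated with MixedModelsMakie (a sketch; the choice of model m_prsm and grouping factor Subj is mine):

```julia
# Caterpillar plot of the Subj conditional modes for m_prsm
caterpillar(m_prsm, :Subj)
```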

8.2 Shrinkage plot

Here we use LMM m_KM.

Code
shrinkageplot!(Figure(; resolution=(1000, 1200)), m_KM)

Figure 7: Shrinkage plots of the subject random effects in model m_KM

9 Parametric bootstrap

Using LMM m_KM we:

  • generate a bootstrap sample
  • compute shortest coverage intervals for the LMM parameters
  • plot densities of bootstrapped parameter estimates for residual, fixed effects, variance components, and correlation parameters

9.1 Generate a bootstrap sample

We generate 2500 samples for the 11 model parameters (4 fixed effects, 4 VCs, 2 CPs, and 1 residual).

Random.seed!(1234321)
samp = parametricbootstrap(2500, m_KM);
dat2 = DataFrame(samp.allpars)
first(dat2, 10)

10 rows × 5 columns

      iter  type  group    names                 value
 ──────────────────────────────────────────────────────────
   1     1  β     missing  (Intercept)            3.63605
   2     1  β     missing  RC: match              0.0837241
   3     1  β     missing  SC: yes                1.31892
   4     1  β     missing  RC: match & SC: yes   -0.00712395
   5     1  σ     Item     (Intercept)            0.405821
   6     1  σ     Item     SC: yes                0.269712
   7     1  ρ     Item     (Intercept), SC: yes   0.534839
   8     1  σ     Subj     (Intercept)            0.673444
   9     1  σ     Subj     SC: yes                0.433203
  10     1  ρ     Subj     (Intercept), SC: yes   0.423078
nrow(dat2) # 2500 estimates for each of 11 model parameters
27500

9.2 Shortest coverage interval

DataFrame(shortestcovint(samp))

11 rows × 5 columns

      type  group     names                  lower       upper
 ──────────────────────────────────────────────────────────────
   1  β     missing   (Intercept)            3.26862     3.79615
   2  β     missing   RC: match              0.0282336   0.197728
   3  β     missing   SC: yes                1.07002     1.53522
   4  β     missing   RC: match & SC: yes   -0.0951034   0.0707589
   5  σ     Item      (Intercept)            0.309871    0.595175
   6  σ     Item      SC: yes                0.215572    0.461331
   7  ρ     Item      (Intercept), SC: yes  -0.0569088   0.847254
   8  σ     Subj      (Intercept)            0.459528    0.812702
   9  σ     Subj      SC: yes                0.410394    0.743732
  10  ρ     Subj      (Intercept), SC: yes  -0.109681    0.633857
  11  σ     residual  missing                1.52814     1.65084

We can also visualize the shortest coverage intervals for fixed effects with the ridgeplot() command:

Code
ridgeplot(samp; show_intercept=false)

Figure 8: Ridge plot of fixed-effects bootstrap samples from model m_KM

9.3 Comparative density plots of bootstrapped parameter estimates

9.3.1 Residual

Code
draw(
  data(@subset(dat2, :type == "σ" && :group == "residual")) *
  mapping(:value => "Residual") *
  density();
  figure=(; resolution=(800, 400)),
)

Figure 9: Kernel density estimate from bootstrap samples of the residual standard deviation for model m_KM

9.3.2 Fixed effects (w/o GM)

Code
rn = renamer([
  "(Intercept)" => "GM",
  "SC: yes" => "Semantic congruity effect",
  "RC: match" => "Register congruity effect",
  "RC: match & SC: yes" => "SC x RC interaction effect",
  "(Intercept), SC: yes" => "CP for GM and SC effect" 
])
draw(
  data(@subset(dat2, :type == "β" && :names ≠ "(Intercept)")) *
  mapping(
    :value => "Experimental effect size [rating points]";
    color=:names => rn => "Experimental effects",
  ) *
  density();
  figure=(; resolution=(800, 350)),
)

Figure 10: Kernel density estimate from bootstrap samples of the fixed effects for model m_KM

The densities correspond nicely with the shortest coverage intervals.

9.3.3 VCs and CP for Subj

Code
draw(
  data(@subset(dat2, :type == "σ" && :group == "Subj" )) *
  mapping(
    :value => "Standard deviations [rating points]";
    color=:names => rn => "Variance components",
  ) *
  density();
  figure=(; resolution=(800, 350)),
)

Figure 11: Kernel density estimate from bootstrap samples of the Subj-related VCs for model m_KM

Code
draw(
  data(@subset(dat2, :type == "ρ" && :group == "Subj")) *
  mapping(
    :value => "Correlation";
    color=:names => rn => "Correlation parameter",
  ) *
  density();
  figure=(; resolution=(800, 350)),
)

Figure 12: Kernel density estimate from bootstrap samples of Subj-related CP for model m_KM

The VCs are very nicely defined; the CP not so much.

9.3.4 VCs and CP for Item

Code
draw(
  data(@subset(dat2, :type == "σ" && :group == "Item" )) *
  mapping(
    :value => "Standard deviations [rating points]";
    color=:names => rn => "Variance components",
  ) *
  density();
  figure=(; resolution=(800, 350)),
)

Figure 13: Kernel density estimate from bootstrap samples of the Item-related VCs for model m_KM

Code
draw(
  data(@subset(dat2, :type == "ρ" && :group == "Item")) *
  mapping(
    :value => "Correlation";
    color=:names => rn => "Correlation parameter",
  ) *
  density();
  figure=(; resolution=(800, 350)),
)

Figure 14: Kernel density estimate from bootstrap samples of the Item-related CP for model m_KM

The VCs are very nicely defined; the CP chunk encounters missing values. Need to follow up.

10 Appendix

versioninfo()
Julia Version 1.8.0
Commit 5544a0fab76 (2022-08-17 13:38 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin21.4.0)
  CPU: 12 × Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, skylake)
  Threads: 1 on 6 virtual cores